Mirror Descent with Relative Smoothness in Measure Spaces, with application to Sinkhorn and EM

Neural Information Processing Systems

Many problems in machine learning can be formulated as optimizing a convex functional over a vector space of measures. This paper studies the convergence of the mirror descent algorithm in this infinite-dimensional setting. Defining Bregman divergences through directional derivatives, we derive the convergence of the scheme for relatively smooth and convex pairs of functionals. Such assumptions make it possible to handle non-smooth functionals such as the Kullback--Leibler (KL) divergence. Applying our result to joint distributions and KL, we show that Sinkhorn's primal iterations for entropic optimal transport in the continuous setting correspond to a mirror descent, and we obtain a new proof of its (sub)linear convergence. We also show that Expectation Maximization (EM) can always formally be written as a mirror descent. When optimizing only over the latent distribution while fixing the mixture parameters -- which corresponds to the Richardson--Lucy deconvolution scheme in signal processing -- we derive sublinear rates of convergence.
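
As a concrete illustration (not taken from the paper), the primal Sinkhorn iterations referred to in the abstract reduce, for finite histograms, to the classical alternating scaling updates sketched below; the paper's analysis covers the general measure-space setting, and the function name and parameter values here are purely illustrative.

```python
# Minimal discrete Sinkhorn sketch (illustration only). In the finite case the
# entropic OT problem  min_{P in U(a,b)} <P, C> + eps * KL(P || a b^T)  is solved
# by the alternating scaling updates below, whose primal iterates the paper
# reinterprets as mirror descent steps with respect to the KL divergence.
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Return the entropic OT plan between histograms a and b for cost matrix C."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                 # match the second marginal
        u = a / (K @ v)                   # match the first marginal
    return u[:, None] * K * v[None, :]    # primal iterate P = diag(u) K diag(v)

# Toy example: two random histograms on a grid of the unit interval.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
a = rng.random(50); a /= a.sum()
b = rng.random(50); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(a, b, C)
print(P.sum(), np.abs(P.sum(axis=1) - a).max())  # total mass 1, first marginal matched
```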





Conic Formulations of Transport Metrics for Unbalanced Measure Networks and Hypernetworks

Oliver, Mary Chriselda Antony, Hartman, Emmanuel, Needham, Tom

arXiv.org Machine Learning

The Gromov-Wasserstein (GW) variant of optimal transport, designed to compare probability densities defined over distinct metric spaces, has emerged as an important tool for the analysis of data with complex structure, such as ensembles of point clouds or networks. To overcome certain limitations, such as the restriction to comparisons of measures of equal mass and sensitivity to outliers, several unbalanced or partial transport relaxations of the GW distance have been introduced in the recent literature. This paper is concerned with the Conic Gromov-Wasserstein (CGW) distance introduced by Séjourné, Vialard, and Peyré [35]. We provide a novel formulation in terms of semi-couplings, and extend the framework beyond the metric measure space setting to compare more general network and hypernetwork structures. With this new formulation, we establish several fundamental properties of the CGW metric, including its scaling behavior under dilation, variational convergence in the limit of volume growth constraints, and comparison bounds with established optimal transport metrics. We further derive quantitative bounds that characterize the robustness of the CGW metric to perturbations in the underlying measures. The hypernetwork formulation of CGW admits a simple and provably convergent block coordinate ascent algorithm for its estimation, and we demonstrate the computational tractability and scalability of our approach through experiments on synthetic and real-world high-dimensional and structured datasets.
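
The abstract mentions a block coordinate ascent algorithm for the semi-coupling (conic) formulation of CGW; that algorithm is not reproduced here. As a rough, hedged analogue only, the sketch below shows the standard alternating scheme for the balanced entropic Gromov-Wasserstein discrepancy between two finite networks, which alternates between linearizing the quadratic objective and solving an entropic OT subproblem; all function names and parameters are illustrative.

```python
# Standard balanced entropic GW block-coordinate scheme (illustrative analogue;
# NOT the paper's CGW semi-coupling algorithm).
import numpy as np

def sinkhorn(a, b, M, eps, n_iter=200):
    # Entropic OT subproblem solved by Sinkhorn scaling.
    K = np.exp(-M / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def entropic_gw(C1, C2, p, q, eps=5e-2, outer=20):
    """Alternate between linearizing the GW objective at T and an entropic OT step."""
    T = np.outer(p, q)                                     # feasible initial coupling
    const = (C1**2 @ p)[:, None] + (C2**2 @ q)[None, :]    # constant part of linearized cost
    for _ in range(outer):
        M = const - 2.0 * C1 @ T @ C2.T                    # linearized cost at current T
        T = sinkhorn(p, q, M, eps)                         # block update of the coupling
    return T

# Toy example: two random symmetric "network" cost matrices of different sizes.
rng = np.random.default_rng(0)
C1 = rng.random((30, 30)); C1 = (C1 + C1.T) / 2
C2 = rng.random((40, 40)); C2 = (C2 + C2.T) / 2
p = np.full(30, 1 / 30); q = np.full(40, 1 / 40)
T = entropic_gw(C1, C2, p, q)
print(T.shape, T.sum())   # coupling of total mass 1
```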



Defining neurosymbolic AI

De Smet, Lennert, De Raedt, Luc

arXiv.org Artificial Intelligence

Neurosymbolic AI focuses on integrating learning and reasoning, in particular on unifying logical and neural representations. Despite the existence of an alphabet soup of neurosymbolic AI systems, the field lacks a generally accepted formal definition of what neurosymbolic models and inference really are. We introduce a formal definition of neurosymbolic AI that abstracts over its key ingredients. More specifically, we define neurosymbolic inference as the computation of an integral of the product of a logical function and a belief function. We show that our definition abstracts over key representative neurosymbolic AI systems.
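
As a small illustration (not drawn from the paper), when the integral in this definition reduces to a finite sum over possible worlds, neurosymbolic inference becomes a weighted model count: a sum over worlds of a {0,1}-valued logical function times a belief (probability) function, with the beliefs typically supplied by a neural network. The formula and probabilities below are illustrative only.

```python
# Discrete instance of "integral over a product of a logical and a belief function":
# sum_w  phi(w) * b(w)  over all worlds w, i.e. weighted model counting.
from itertools import product

def logical_phi(w):
    """phi: worlds -> {0, 1}; here the (illustrative) constraint (a OR b) AND NOT (a AND c)."""
    a, b, c = w
    return int((a or b) and not (a and c))

def belief(w, probs):
    """b: independent Bernoulli beliefs over the three propositions."""
    out = 1.0
    for wi, pi in zip(w, probs):
        out *= pi if wi else (1.0 - pi)
    return out

probs = [0.9, 0.3, 0.4]   # e.g. outputs of a neural classifier (illustrative values)
score = sum(logical_phi(w) * belief(w, probs) for w in product([0, 1], repeat=3))
print(score)              # probability that the logical constraint holds under the beliefs
```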